
    Fabrication and characterization of a polymeric nanofluidic device for DNA analysis

    The growing need for cheaper and faster sequencing of long biopolymers such as DNA and RNA has prompted the development of new technologies. Among the novel techniques for analyzing these biopolymers, an approach using nanochannel-based fluidic devices is attractive because it is a label-free, amplification-free, single-molecule method that can be scaled for high-throughput analysis. Despite recent demonstrations of nanochannel-based fluidic devices for analyzing the physical properties of such biopolymers, most devices have been fabricated in inorganic materials such as silicon, silicon nitride, and glass using expensive high-end nanofabrication techniques such as focused ion beam and electron beam lithography. To use nanochannel-based fluidic devices for a variety of bioanalyses, it is imperative to develop a technology for low-cost, high-throughput fabrication of such devices and to demonstrate the feasibility of the fabricated devices for obtaining information on biopolymers. We developed a low-cost, high-throughput method to build polymer-based nanofluidic devices with sub-100 nm nanochannels using direct imprinting into polymer substrates. Imprinting with polymer stamps showed good replication fidelity over multiple replication cycles, preventing damage to the expensive nanopatterned master and reducing undesirable deformation in the molded polymer substrate. This approach opens up the possibility of building cheap, disposable polymer nanofluidic devices for single-molecule analysis. Ion transport and DNA motion in nanofluidic systems were also studied. Simulation and experimental results indicate that the rapid decay of the electric field at the micro/nano interface plays a major role, in addition to the bulk flow in the microfluidic networks. Inlet structures and bypass microchannels were designed and built, which enhanced the DNA capture rate by over 500%. Owing to the improved capture rate, the blockade current of DNA translocation through a nanochannel could also be measured. In the current-versus-time curves, we observed both current increases and decreases in the presence of a DNA molecule in the nanochannel, which we attribute to ion channel blockage and to the electrical double layer formed around the DNA molecule, respectively.

    Quantum Metrology with Cold Atoms

    Quantum metrology is the science that aims to achieve precision measurements by making use of quantum principles. Thanks to the well-developed techniques for manipulating and detecting cold atoms, cold atomic systems provide an excellent platform for implementing precision quantum metrology. In this chapter, we review the general procedures of quantum metrology and some experimental progress in quantum metrology with cold atoms. First, we give the general framework of quantum metrology and the calculation of the quantum Fisher information, which is the core of quantum parameter estimation. Then, we introduce quantum interferometry with single- and multiparticle states. In particular, for some typical multiparticle states, we analyze their ultimate precision limits and show how quantum entanglement can enhance the measurement precision beyond the standard quantum limit. Finally, we review some experimental progress in quantum metrology with cold atomic systems. Comment: 53 pages, 9 figures, revised version.
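For context, the precision limits referred to above follow from the quantum Cramér-Rao bound; the relations below are standard results of quantum parameter estimation, not claims specific to this chapter:

```latex
% For M independent repetitions and quantum Fisher information F_Q,
% the quantum Cramer-Rao bound reads
\Delta\theta \ge \frac{1}{\sqrt{M F_Q}} .
% For N uncorrelated particles F_Q \propto N, giving the standard
% quantum limit (SQL); maximal entanglement gives F_Q \propto N^2,
% the Heisenberg limit (HL):
\Delta\theta_{\mathrm{SQL}} \sim \frac{1}{\sqrt{N}}, \qquad
\Delta\theta_{\mathrm{HL}} \sim \frac{1}{N} .
```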

    Kibble-Zurek dynamics in an array of coupled binary Bose condensates

    The universal dynamics of spontaneous symmetry breaking is central to understanding universal defect formation in systems ranging from the early universe to condensed-matter and ultracold atomic systems. We explore the universal real-time dynamics in an array of coupled binary atomic Bose-Einstein condensates in optical lattices, which undergo a spontaneous symmetry breaking from the symmetric Rabi oscillation to the broken-symmetry self-trapping. In addition to Goldstone modes, there exists a gapped Higgs mode whose excitation gap vanishes at the critical point. In the slow passage through the critical point, we analytically find that the symmetry-breaking dynamics obeys the Kibble-Zurek mechanism. From the scalings of bifurcation delay and domain formation, we numerically extract two Kibble-Zurek exponents $b_{1}=\nu/(1+\nu z)$ and $b_{2}=1/(1+\nu z)$, which give the static correlation-length critical exponent $\nu$ and the dynamic critical exponent $z$. Our approach provides an efficient way to simultaneously determine the critical exponents $\nu$ and $z$ for a continuous phase transition. Comment: 6 pages, 4 figures, accepted for publication in EPL (Europhysics Letters).
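Because $b_1$ and $b_2$ determine $\nu$ and $z$ by simple algebra, the final extraction step can be sketched as follows (a minimal illustration of the inversion only, not the authors' code):

```python
def critical_exponents(b1, b2):
    """Recover (nu, z) from the two Kibble-Zurek exponents
    b1 = nu / (1 + nu*z) and b2 = 1 / (1 + nu*z)."""
    nu = b1 / b2           # the ratio cancels the common factor 1/(1 + nu*z)
    z = (1.0 - b2) / b1    # from b2: nu*z = 1/b2 - 1, then divide by nu = b1/b2
    return nu, z
```

For example, a transition with $\nu = 1$ and $z = 2$ gives $b_1 = b_2 = 1/3$, and the function recovers those exponents exactly.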

    An Analysis and Defence of the Two-Dimensional Zombie Argument

    This thesis presents an analysis and defence of the two-dimensional zombie argument against physicalism by David Chalmers (2009). Put simply, the zombie argument uses the conceivability of zombies in an attempt to defeat physicalism, where zombies are physical duplicates of humans that lack consciousness, and physicalism is the thesis that everything is either physical or supervenes on the physical. Despite the zombie argument being one of the most influential contemporary anti-physicalist arguments, the two-dimensional zombie argument, a refined version of the original, remains relatively unknown among philosophers. In this thesis, I aim to clarify the two-dimensional zombie argument and defend it against four recent objections. The thesis is divided into four chapters. Chapter 1 introduces the two-dimensional zombie argument and elucidates the key concepts involved. Chapter 2 provides an overall summary of past discussions of the argument, covering the major objections raised against it and the defences offered in response. In Chapter 3, I provide detailed defences against objections from three of the more recent papers: one by Philip Goff and David Papineau (2014); one by Daniel Stoljar (2020); and one by Eugen Fischer and Justin Sytsma (2021). In Chapter 4, I share my speculations on how language and intuitions might be the roots of many disputes over the argument and how further progress could be made. Finally, I conclude that the two-dimensional zombie argument, despite the large number of objections to it, remains in a highly defensible position. Once the argument is properly understood, there seems to be a lack of knockdown objections. At the same time, further progress can still be made by eliminating verbal misunderstandings and attempting to justify the intuitions involved.

    Model-Based Design for High-Performance Signal Processing Applications

    Developing high-performance signal processing applications requires not only effective signal processing algorithms but also efficient software design methods that can take full advantage of the available processing resources. An increasingly important type of hardware platform for high-performance signal processing is a multicore central processing unit (CPU) combined with a graphics processing unit (GPU) accelerator. Efficiently coordinating computations on both the host (CPU) and device (GPU), and managing host-device data transfers, are critical to utilizing CPU-GPU platforms effectively. However, such coordination is challenging for system designers, given the complexity of modern signal processing applications and the stringent constraints under which they must operate. Dataflow models of computation provide a useful framework for addressing this challenge. In such a modeling approach, signal processing applications are represented as directed graphs that can be viewed intuitively as high-level signal flow diagrams. The formal, high-level abstraction provided by dataflow principles offers a useful foundation for investigating model-based analysis and optimization for new challenges in the design and implementation of signal processing systems. This thesis presents a new model-based design methodology and an evolving suite of three novel design tools. These contributions provide an automated design flow for high-performance signal processing. The design flow takes high-level dataflow representations as input and systematically derives optimized implementations on CPU-GPU platforms. The proposed design flow and associated design methodology are inspired by a previously-developed application programming interface (API) called the Hybrid Task Graph Scheduler (HTGS). HTGS was developed for implementing scalable workflows for high-performance computing applications on compute nodes that have large numbers of processing cores and that may be equipped with multiple GPUs. However, HTGS has a limitation due to its relatively loose use of dataflow techniques (or other forms of model-based design), which means that significant designer effort is required to apply the provided APIs effectively. The main contributions of the thesis are summarized as follows: (1) Development of a companion tool to HTGS called the HTGS Model-based Engine (HMBE). HMBE introduces novel capabilities to automatically analyze application dataflow graphs and generate efficient schedules for these graphs through hybrid compile-time and runtime analysis. The systematic, model-based approaches provided by HMBE enable the automation of complex tasks that must be performed manually when using HTGS alone. We have demonstrated the effectiveness of HMBE and the associated model-based design methodology through extensive experiments involving two case studies: an image stitching application for large-scale microscopy images, and a background subtraction application for multispectral video streams. (2) Integration of HMBE with HTGS to develop a new design tool for the design and implementation of high-performance signal processing systems. This tool, called HMBE-Integrated-HTGS (HI-HTGS), provides novel capabilities for model-based system design, memory management, and scheduling targeted to multicore platforms. The tool takes as input a single- or multidimensional dataflow model of the given signal processing application. It then expands the dataflow model into a representation that exposes more parallelism and provides significantly more detail on the interactions between different application tasks (dataflow actors). This expanded representation is derived by HI-HTGS at compile time and provided as input to the HI-HTGS runtime system, which in turn uses it to guide dynamic scheduling decisions throughout system execution. (3) Extension of HMBE to the class of CPU-GPU platforms motivated above. We call this new model-based design tool the CPU-GPU Model-Based Engine (CGMBE). CGMBE uses an unfolded dataflow graph representation of the application along with thread-pool-based executors, which are optimized for efficient operation on the targeted CPU-GPU platform. This approach automates complex aspects of the design and implementation process for signal processing system designers while maximizing the utilization of computational power, reducing the memory footprint for both the CPU and GPU, and facilitating experimentation for tuning performance-oriented designs.
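As a minimal illustration of the kind of dataflow analysis such tools automate (a generic synchronous-dataflow example, not the HMBE or HTGS API): for a two-actor graph, the balance equation yields the smallest consistent firing counts, which a scheduler can then use to build a periodic schedule.

```python
from math import gcd

def repetition_vector(prod, cons):
    """Minimal integer firing counts (rA, rB) for an SDF edge A -> B,
    where A produces `prod` tokens per firing and B consumes `cons`,
    solving the balance equation prod * rA == cons * rB."""
    g = gcd(prod, cons)
    return cons // g, prod // g

rA, rB = repetition_vector(2, 3)  # A fires 3 times, B fires 2 times
assert 2 * rA == 3 * rB          # tokens produced == tokens consumed
```

Because these counts are computable at compile time, buffer sizes and schedules can be derived statically, which is the analyzability that model-based engines exploit.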

    Group B: Polar Coordinate Whiteboard Writer

    With initial funding of $250.00, our group attempted to make a polar coordinate whiteboard writer to be used in educational settings. The market for a polar coordinate whiteboard writer is a blue ocean. Having a successful prototype will allow us to find a niche in a market that borders the global education market.
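For reference, the core coordinate transform such a plotter must perform, mapping a polar pen position to Cartesian whiteboard coordinates, is sketched below (a generic illustration, not the team's firmware):

```python
import math

def polar_to_cartesian(r, theta):
    """Map a pen position given as (radius, angle in radians)
    to (x, y) on the whiteboard plane."""
    return r * math.cos(theta), r * math.sin(theta)
```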

    Decidable Variable-Rate Dataflow for Heterogeneous Signal Processing Systems

    Dynamic dataflow models of computation have become widely used through their adoption in popular programming frameworks such as TensorFlow and GNU Radio. Although dynamic dataflow models offer more programming freedom, they lack analyzability compared to their static counterparts (such as synchronous dataflow). In this paper, we advocate the use of a boundedly dynamic dataflow model of computation, VR-PRUNE, that remains analyzable but still offers more programming freedom than a fully static dataflow model. The paper presents the VR-PRUNE model of computation and runtime, and illustrates its applicability to practical signal processing applications through two use cases: an adaptive convolutional neural network and a predistortion filter for wireless communications. Through runtime experiments on two heterogeneous computing platforms, we show that VR-PRUNE is both flexible and efficient. ©2020 IEEE.
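As a generic illustration of why bounded rate variability preserves analyzability (a conceptual sketch only; it does not use the actual VR-PRUNE API): if an actor's output rate is data-dependent but bounded, downstream buffers can still be sized at compile time.

```python
MAX_RATE = 4  # declared upper bound on tokens emitted per firing

def variable_rate_actor(x):
    """Emit a data-dependent, but bounded, number of copies of x."""
    n = abs(int(x)) % (MAX_RATE + 1)  # some data-dependent rate, n <= MAX_RATE
    return [x] * n

# Because the rate is bounded, a buffer of MAX_RATE tokens per firing is
# provably sufficient -- a guarantee a fully dynamic model cannot give.
outputs = [variable_rate_actor(v) for v in (1, 2, 7)]
assert all(len(out) <= MAX_RATE for out in outputs)
```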